

THE LITTLE THOUGHTS OF THINKING MACHINES

When we interact with computers and other machines, we often use language ordinarily used for talking about people. We may say of a vending machine, "It wants another ten cents for a lousy candy bar." We may say of an automatic teller machine, "It thinks I don't have enough money in my account because it doesn't yet know about the deposit I made this morning." This article is about when we're right or almost right in saying these things, and when it's a good idea to think of machines that way.

For almost a century we have used machines in our daily lives whose detailed functioning most of us don't understand. Few of us know much about how the electric light system or the telephone system works internally. We do know their external behavior; we know that lights are turned on and off by switches and how to dial telephone numbers. We may not know much about the internal combustion engine, but we know that a car needs more gas when the gauge reads near EMPTY.

In the next century we'll be increasingly faced with much more complex computer-based systems. It won't be necessary for many people to know very much about how they work internally, but what we will have to know about them in order to use them is more complex than what we need to know about electric lights and telephones. As our daily lives involve ever more sophisticated computers, we will find that ascribing little thoughts to machines will be increasingly useful in understanding how to get the most good out of them.

A lot of what we'll have to know concerns the information stored in computers, which is why we find ourselves using psychological words like "knows", "thinks", and "wants" in referring to machines, even though machines are very different from humans and these words arose from the human need to talk about other humans.

According to some authorities, to use these words, the language of the mind, to talk about machines is to commit the intellectual sin of anthropomorphism. Anthropomorphism can be a sin, all right, but it is going to be increasingly difficult to understand machines without using mental terms.

Ever since Descartes, philosophically minded people have wrestled with the question of whether it is possible for machines to think. As we interact more and more with computers - both personal computers and others - the questions of whether machines can think and what kind of thoughts they can have become ever more pertinent. We can ask whether machines remember, believe, know, intend, like or dislike, want, understand, promise, owe, have rights or duties, or deserve rewards or punishment. Is this an all-or-nothing question, or can we say that some machines do some of these things and not others, or that they do them to some extent?

My answer, based on some 30 years working in the field of artificial intelligence, is that machines have some intellectual qualities to differing extents. Even some very simple machines can be usefully regarded as having some intellectual qualities. Machines can and will be given more and more intellectual qualities; not even human intelligence is a limit. However, artificial intelligence is a difficult branch of science and engineering, and, judging by present slow progress, it might take a long time. The science of genetics took 100 years between Mendel's experiments with peas and cracking the DNA code for proteins, and it isn't done yet.

Present machines have almost no emotional qualities, and, in my opinion, it would be a bad idea to give them any. We have enough trouble figuring out our duties to our fellow humans and to animals without creating a bunch of robots with qualities that would allow anyone to feel sorry for them or would allow them to feel sorry for themselves.

Since I advocate some anthropomorphism, I'd better explain what I consider good or bad anthropomorphism. Anthropomorphism is the ascription of human characteristics to things not human. When is it a good idea to do this? When it says something that cannot as conveniently be said some other way.

Don't get me wrong. The kind of anthropomorphism where someone says, "This terminal hates me!" and bashes it is just as silly as ever. It is also common to ascribe personalities to cars, boats, and other machinery. It is hard to say whether anyone takes this seriously. Anyway, I'm not supporting any of these things.

The reason for ascribing mental qualities and mental processes to machines is the same as for ascribing them to other people. It helps us understand what they will do, how our actions will affect them, and how to compare them with ourselves.

Researchers in artificial intelligence (AI) are interested in the use of mental terms to describe machines for two reasons. First, we'd like to provide machines with theories of knowledge and belief so they can reason about what their users know, don't know, and want. Second, what the user knows about the machine can often best be expressed using mental terms.

Suppose I'm using an automatic teller machine at my bank. I may make statements about it like, "It won't give me any cash because it knows there's no money in my account," or, "It knows who I am because I gave it my secret number." It is not true that the teller machine has the thought, "There's no money in his account," and so refuses to give me cash. But it was designed to act as if it does, and if I want to figure out how to make it give me cash in the future, I should treat it as if it knows that sort of thing. "If it walks like a duck and quacks like a duck, then it's a duck."

It's difficult to be rigorous about whether a machine really "knows", "thinks", etc., because we're hard put to define these things. We understand human mental processes only slightly better than a fish understands swimming.

Consider the following example of the instructions that came with an electric blanket. "Place the control near the bed in a place that is neither hotter nor colder than the room itself. If the control is placed on a radiator or a radiant-heated floor, it will "think" the entire room is hot and will lower your blanket temperature, making your bed too cold. If the control is placed on the window sill in a cold draft, it will "think" the entire room is cold and will heat up your bed so it will be too hot."

I suppose some philosophers, psychologists, and English teachers would maintain that the blanket manufacturer is guilty of anthropomorphism, and some would claim that great harm can come from thus ascribing to machines qualities which only humans can have. I argue that saying the blanket thermostat "thinks" is ok; they could even have left off the quotes. Moreover, this helps us understand how the thermostat works. The example is extreme, because most people don't need the word "think" to understand how a thermostatic control works. Nevertheless, the blanket manufacturer was probably right in thinking that it would help some users.

Keep in mind that the thermostat can properly be considered to have just three possible thoughts or beliefs. It may believe that the room is too hot, or that it is too cold, or that it is ok. It has no other beliefs; for example, it does not believe that it is a thermostat.
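
For readers who program, one way to make this concrete is the little sketch below. It is only an illustration of what it means to credit the thermostat with exactly three beliefs; it does not describe any real device, and every name and number in it is invented for the example.

    # Illustrative sketch: a thermostat that holds exactly one of three
    # beliefs about the room, and acts on that belief and nothing else.
    # The setpoint and tolerance are invented for the example.
    SETPOINT = 68.0    # desired room temperature, degrees F
    TOLERANCE = 1.0    # how far from the setpoint still counts as "ok"

    def thermostat_belief(room_temperature):
        """Return the one belief the thermostat holds about the room."""
        if room_temperature > SETPOINT + TOLERANCE:
            return "the room is too hot"
        if room_temperature < SETPOINT - TOLERANCE:
            return "the room is too cold"
        return "the room is ok"

    def heater_should_run(belief):
        """The thermostat's whole behavior follows from its belief."""
        return belief == "the room is too cold"

Nothing in the sketch corresponds to the belief "I am a thermostat"; the three phrases above exhaust what it can believe, which is the point.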

The example of the thermostat is a very simple one. If we had only thermostats to think about, we wouldn't bother with the concept of belief at all. And if all we wanted to think about were zero and one, we wouldn't bother with the concept of number.

The automatic teller is a slightly more complicated example. It has beliefs like, "There's enough money in the account," and, "I don't give out that much money." We can imagine a slightly fancier automatic teller that handles loans, loan payments, traveler's checks, and so forth, with beliefs like, "The payment wasn't made on time," or, "This person is a good credit risk."

The next example is adapted from the University of California philosopher John Searle. A person who doesn't know Chinese memorizes a book of rules for manipulating Chinese characters. The rules tell him how to extract certain parts of a sequence of characters, how to re-arrange them, and how finally to send back another sequence of characters. These rules say nothing about the meaning of the characters, just how to compute with them. He is repeatedly given Chinese sentences, to which he applies the rules, and he gives back what turn out, because of the clever rules, to be Chinese sentences that are appropriate replies. We suppose that the rules result in a Chinese conversation so intelligent that the person giving and receiving the sentences can't tell him from an intelligent Chinese speaker. This is analogous to a computer, which only obeys its programming language, but can be programmed such that one can communicate with it in a different programming language, or in English. Searle says that since the person in the example doesn't understand Chinese - even though he can produce intelligent Chinese conversation by following rules - a computer cannot be said to "understand" things. He makes no distinction, however, between the hardware (the person) and the process (the set of rules). I would argue that the set of rules understands Chinese, and, analogously, a computer program may be said to understand things, even if the computer does not. Both Searle and I are ignoring any practical difficulties.
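
The rule book in Searle's story is, at bottom, a lookup procedure. The toy sketch below shows the kind of thing it does - match an incoming string of characters against stored patterns and send back a stored reply - without the rules ever consulting what the characters mean. The table entries are invented for illustration, and a rule book good enough to fool anyone would be enormously larger.

    # Toy sketch of rule-following without understanding. Incoming
    # character strings are matched against stored patterns and a canned
    # reply is sent back; the meanings never enter into it.
    RULES = {
        "你好吗": "我很好，谢谢。",
        "今天天气怎么样": "今天天气很好。",
    }

    def send_back(characters):
        """Apply the rules to one sequence of characters, return another."""
        return RULES.get(characters, "请再说一遍。")  # stock reply when no rule matches

Whether the person, the rule book, or neither one "understands" Chinese is exactly what is in dispute; the sketch only makes plain that the procedure itself never looks at meanings.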

Daniel Dennett, a Tufts University philosopher, has proposed three attitudes aimed at understanding a system with which one interacts.

The first he calls the physical stance. In this we look at the system in terms of its physical structure at various levels of organization. Its parts have their properties, and they interact in ways that we know about. In principle the analysis can go down to the atoms and their parts. Looking at a thermostat from this point of view, we'd want to understand the working of the bimetal strip that most thermostats use. For the automatic teller, we'd want to know about integrated circuitry, for one thing. (Let's hope no one's in line behind us while we do this.)

The second is called the design stance. In this we analyze something in terms of the purpose for which it is designed. Dennett's example of this is the alarm clock. We can usually figure out what an alarm clock will do, e.g. when it will go off, without knowing whether it is made of springs and gears or of integrated circuits. The user of an alarm clock typically doesn't know or care much about its internal structure, and this information wouldn't be of much use. Notice that when an alarm clock breaks, its repair requires taking the physical stance. The design stance can usefully be applied to a thermostat - it shouldn't be too hard to figure out how to set it, no matter how it works. With the automatic teller, things are a little less clear.

The design stance is appropriate not only for machinery but also for the parts of an organism. It is amusing that we can't attribute a purpose to the existence of ants, but we can find a purpose for the glands in an ant that emit a chemical substance for other ants to follow.

The third is called the intentional stance, and this is what we'll often need for understanding computer programs. In this we try to understand the behavior of a system by ascribing to it beliefs, goals, intentions, likes and dislikes, and other mental qualities. In this stance we ask ourselves what the thermostat thinks is going on, or what the automatic teller wants from us before it'll give us cash. We say things like, "The store's billing computer wants me to pay up, so it intends to frighten me by sending me threatening letters." The intentional stance is most useful when it is the only way of expressing what we know about a system.

(For variety Dennett mentions the astrological stance. In this the way to think about the future of a human is to pay attention to the configuration of the stars when he was born. To determine whether an enterprise will succeed, we determine whether the signs are favorable. This stance is clearly distinct from the others - and worthless.)

The mental qualities of present machines are not the same as ours. While we will probably be able, in the future, to make machines with mental qualities more like our own, we'll probably never want to deal with machines that are too much like us. Who wants to deal with a computer that loses its temper, or an automatic teller that falls in love? Computers will end up with the psychology that is convenient to their designers (and they'll be fascist bastards if those designers don't think twice). Program designers have a tendency to think of the users as idiots who need to be controlled, rather than thinking of their program as a servant whose master, the user, should be able to control it. If designers and programmers think about the apparent mental qualities of their programs, they'll create programs that are easier and pleasanter to deal with.